Consistent and asymptotically normal parameter estimates for hidden Markov mixtures of Markov models

Authors

Abstract


Similar articles

Parameter estimation in pair hidden Markov models

This paper deals with parameter estimation in pair hidden Markov models (pairHMMs). We first provide a rigorous formalism for these models and discuss possible definitions of likelihoods. Since the model is biologically motivated, some restrictions with respect to the full parameter space naturally occur. Existence of two different information divergence rates is established and divergence propert...


Bayesian posterior mean estimates for Poisson hidden Markov models

This paper focuses on the Bayesian posterior mean estimates (or Bayes’ estimates) of the parameter set of Poisson hidden Markov models, in which the observation sequence is generated by a Poisson distribution whose parameter depends on the underlying discrete-time, time-homogeneous Markov chain. Although the most commonly used procedures for obtaining parameter estimates for hidden Markov models ...
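The likelihood of the Poisson HMM described above can be computed with the standard scaled forward recursion. The sketch below assumes a given transition matrix, per-state Poisson rates, and an initial distribution (all names are illustrative; the paper's own estimation procedure is Bayesian and not shown here):

```python
import numpy as np
from scipy.stats import poisson

def poisson_hmm_loglik(obs, A, lambdas, pi):
    """Log-likelihood of a Poisson HMM via the scaled forward algorithm.

    obs     : integer observation sequence
    A       : (K, K) transition matrix of the hidden chain
    lambdas : (K,) Poisson rate for each hidden state
    pi      : (K,) initial state distribution
    """
    # Forward variable for the first observation, then rescale at each
    # step and accumulate the log of the scaling constants.
    alpha = pi * poisson.pmf(obs[0], lambdas)
    c = alpha.sum()
    loglik = np.log(c)
    alpha = alpha / c
    for y in obs[1:]:
        # alpha_t(j) = sum_i alpha_{t-1}(i) A[i, j] * P(y | state j)
        alpha = (alpha @ A) * poisson.pmf(y, lambdas)
        c = alpha.sum()
        loglik += np.log(c)
        alpha = alpha / c
    return loglik
```

Rescaling at every step keeps the forward variables from underflowing on long sequences; the sum of the log scaling constants recovers the exact log-likelihood.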


Estimating Components in Finite Mixtures and Hidden Markov Models

When the unobservable Markov chain in a hidden Markov model is stationary the marginal distribution of the observations is a finite mixture with the number of terms equal to the number of the states of the Markov chain. This suggests estimating the number of states of the unobservable Markov chain by determining the number of mixture components in the marginal distribution. We therefore present...
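The idea above — estimate the number of hidden states by estimating the number of components in the marginal mixture — can be sketched by fitting mixtures of increasing order and selecting one with a penalized-likelihood criterion. This is only an illustration using scikit-learn's Gaussian mixtures and BIC; the paper's own selection procedure may differ:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def estimate_num_states(obs, max_states=5, seed=0):
    """Estimate the number of hidden states as the mixture order that
    minimizes BIC on the marginal distribution of the observations."""
    X = np.asarray(obs, dtype=float).reshape(-1, 1)
    bics = [GaussianMixture(n_components=k, random_state=seed).fit(X).bic(X)
            for k in range(1, max_states + 1)]
    # argmin is 0-based; component counts start at 1.
    return int(np.argmin(bics)) + 1
```

For well-separated state-conditional distributions this recovers the true order; with heavy overlap, any order-selection criterion becomes less reliable.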


Parameter Estimation for Hidden Markov Models with Intractable Likelihoods

Approximate Bayesian computation (ABC) is a popular technique for approximating likelihoods and is often used in parameter estimation when the likelihood functions are analytically intractable. Although the use of ABC is widespread in many fields, there has been little investigation of the theoretical properties of the resulting estimators. In this paper we give a theoretical analysis of the as...
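The basic ABC rejection scheme the abstract refers to is easy to sketch: draw a parameter from the prior, simulate data, and keep the draw whenever a summary of the simulated data lands within a tolerance of the observed summary. All function names below are placeholders, not the paper's notation:

```python
import numpy as np

def abc_rejection(obs_summary, prior_sampler, simulate, summarize,
                  n_draws=10000, eps=0.5, seed=0):
    """ABC rejection sampler: accept theta when the simulated summary
    statistic is within eps of the observed summary."""
    rng = np.random.default_rng(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler(rng)
        s = summarize(simulate(theta, rng))
        if abs(s - obs_summary) < eps:
            accepted.append(theta)
    return np.array(accepted)
```

As eps shrinks, the accepted draws approximate the true posterior more closely but the acceptance rate drops; the theoretical properties of the resulting estimators are exactly what the paper analyzes.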


Fast Training Convergence Using Mixtures of Hidden Markov Models

In this paper, we describe a method for mixing an arbitrary number of discrete fixed-structure Hidden Markov Models such that we retain their collective training experience. We show that, when presented with a novel data sequence, this new mixture model converges in significantly fewer iterations than either a randomly initialized model or any one of the mixture’s component models. We also expl...
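One simple way to combine fixed-structure discrete HMMs so that the result retains the component models' training experience is to average their parameter matrices as an initializer for further training. This is a hypothetical reading of "mixing"; the paper's construction may differ:

```python
import numpy as np

def mix_hmm_params(transitions, emissions, weights=None):
    """Combine several same-shape discrete HMMs by weighted averaging of
    their transition and emission matrices, renormalizing rows so the
    result is again a valid HMM parameterization."""
    transitions = np.asarray(transitions, dtype=float)
    emissions = np.asarray(emissions, dtype=float)
    if weights is None:
        weights = np.full(len(transitions), 1.0 / len(transitions))
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    A = (w * transitions).sum(axis=0)
    B = (w * emissions).sum(axis=0)
    A /= A.sum(axis=1, keepdims=True)
    B /= B.sum(axis=1, keepdims=True)
    return A, B
```

Using such an average as the starting point for Baum-Welch on a novel sequence is one plausible mechanism for the faster convergence the abstract reports, compared with random initialization.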



Journal

Journal title: Bernoulli

Year: 2005

ISSN: 1350-7265

DOI: 10.3150/bj/1110228244